Deformable Registration through Learning of Context-Specific Metric Aggregation
We propose a novel weakly supervised discriminative algorithm for learning context-specific registration metrics as a linear combination of conventional similarity measures. Conventional metrics have been extensively used over the past two decades, and therefore both their strengths and limitations are known. The challenge is to find the optimal relative weighting (or parameters) of the different metrics forming the similarity measure of the registration algorithm. Hand-tuning these parameters would yield sub-optimal solutions and quickly becomes infeasible as the number of metrics increases. Furthermore, such a hand-crafted combination can only be set at a global scale (the entire volume) and therefore cannot account for the varying properties of different tissues. We propose a learning algorithm for estimating these parameters locally, conditioned on the semantic classes of the data. The objective function of our formulation is a difference of convex functions, a special case of non-convex function, which we optimize using the concave-convex procedure. As a proof of concept, we show the impact of our approach on three challenging datasets covering different anatomical structures and modalities.
Comment: Accepted for publication in the 8th International Workshop on Machine Learning in Medical Imaging (MLMI 2017), held in conjunction with MICCAI 2017
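The core idea — scoring each voxel with a linear combination of conventional similarity metrics whose weights are conditioned on the voxel's semantic class — can be sketched as follows. The function name, array shapes, and values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def aggregated_similarity(metrics, weights, labels):
    """Locally weighted similarity: each voxel's score is a linear
    combination of conventional metrics, with the weight vector
    selected by the voxel's semantic class.

    metrics: (M, N) per-metric, per-voxel similarity values
    weights: (K, M) learned weights, one row per semantic class
    labels:  (N,)   semantic class index of each voxel
    """
    w = weights[labels]                   # (N, M): class-specific weights
    return np.sum(w * metrics.T, axis=1)  # (N,) aggregated similarity
```

In the actual method these weights would be learned (here via the concave-convex procedure), but the aggregation step itself reduces to this class-conditioned dot product.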
DRINet for medical image segmentation
Convolutional neural networks (CNNs) have revolutionized medical image analysis over the past few years. The U-Net is one of the best-known CNN architectures for semantic segmentation and has achieved remarkable success in many medical image segmentation applications. The U-Net consists of standard convolution layers, pooling layers, and upsampling layers; the convolution layers learn representative features of the input images, from which the segmentation is constructed. However, the features learned by standard convolution layers are not distinctive when the differences among categories are subtle in terms of intensity, location, shape, and size. In this paper, we propose a novel CNN architecture, called Dense-Res-Inception Net (DRINet), to address this challenging problem. DRINet consists of three blocks: a convolutional block with dense connections, a deconvolutional block with residual Inception modules, and an unpooling block. The proposed architecture outperforms the U-Net in three challenging applications: multi-class segmentation of cerebrospinal fluid (CSF) on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class brain tumour segmentation on MR images.
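Dense connectivity, used in the first of DRINet's three blocks, feeds each layer the concatenation of all preceding feature maps along the channel axis. A minimal NumPy sketch under stated assumptions — the `conv` callable is a hypothetical stand-in for a real convolution layer, and channel-first layout is assumed:

```python
import numpy as np

def dense_block(x, n_layers, conv):
    """Dense connectivity: each layer receives the channel-wise
    concatenation of the input and all previous layers' outputs,
    and the block emits the concatenation of everything."""
    feats = [x]
    for _ in range(n_layers):
        out = conv(np.concatenate(feats, axis=0))  # concat along channels
        feats.append(out)
    return np.concatenate(feats, axis=0)
```

With a toy `conv` that emits one channel, an input with 2 channels and 3 layers yields 2 + 3 = 5 output channels — the characteristic channel growth of dense blocks.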
Concurrent ischemic lesion age estimation and segmentation of CT brain using a transformer-based network
The cornerstone of stroke care is expedient management that varies depending on the time since stroke onset. Consequently, clinical decision making is centered on accurate knowledge of timing and often requires a radiologist to interpret Computed Tomography (CT) of the brain to confirm the occurrence and age of an event. These tasks are particularly challenging due to the subtle expression of acute ischemic lesions and the dynamic nature of their appearance. Automation efforts have not yet applied deep learning to estimate lesion age, and have treated the two tasks independently, thereby overlooking their inherent complementary relationship. To leverage this relationship, we propose a novel end-to-end multi-task transformer-based network optimized for concurrent segmentation and age estimation of cerebral ischemic lesions. By utilizing gated positional self-attention and CT-specific data augmentation, the proposed method can capture long-range spatial dependencies while remaining trainable from scratch in the low-data regimes commonly found in medical imaging. Furthermore, to better combine multiple predictions, we incorporate uncertainty by using quantile loss to estimate a probability density function of lesion age. The effectiveness of our model is extensively evaluated on a clinical dataset of 776 CT images from two medical centers. Experimental results demonstrate that our method obtains promising performance, with an area under the curve (AUC) of 0.933 for classifying lesion age ≤4.5 hours, compared to 0.858 for a conventional approach, and outperforms task-specific state-of-the-art algorithms.
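The quantile (pinball) loss behind the lesion-age density estimate penalizes under- and over-prediction asymmetrically: predicting several quantiles with different `q` traces out a distribution rather than a point estimate. A minimal sketch with hypothetical names, not the paper's exact formulation:

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball loss for quantile level q in (0, 1).
    Under-prediction (y_true > y_pred) is weighted by q,
    over-prediction by (1 - q), so minimizing it pushes
    y_pred towards the q-th conditional quantile."""
    e = y_true - y_pred
    return float(np.mean(np.maximum(q * e, (q - 1.0) * e)))
```

For example, with q = 0.9, under-predicting by one hour costs 0.9 while over-predicting by one hour costs only 0.1, so the fitted value sits near the 90th percentile of plausible lesion ages.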
Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15 dB on images and +0.39 dB on videos) and is an order of magnitude faster than previous CNN-based methods.
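The sub-pixel convolution layer ends with a periodic-shuffle step that rearranges an LR feature map with r² times more channels into an HR image. A NumPy sketch of that rearrangement alone (channel-first layout assumed; the learned convolutions that produce the input are omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffle: rearrange a (C*r^2, H, W) LR tensor into a
    (C, H*r, W*r) HR tensor. Each group of r^2 channels supplies the
    r x r sub-pixel neighbourhood of one output location."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (c, i, j)
    x = x.transpose(0, 3, 1, 4, 2)      # interleave: (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)
```

Because all convolutions run at LR resolution and only this cheap reshuffle touches the HR grid, the overall SR operation stays fast enough for real-time video.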
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Large-scale Quality Control of Cardiac Imaging in Population Studies: Application to UK Biobank
In large population studies such as the UK Biobank (UKBB), quality control of the acquired images by visual assessment is unfeasible. In this paper, we apply a recently developed fully-automated quality control pipeline for cardiac MR (CMR) images to the first 19,265 short-axis (SA) cine stacks from the UKBB. We present the results for the three estimated quality metrics (heart coverage, inter-slice motion, and image contrast in the cardiac region) as well as their potential associations with factors including acquisition details and subject-related phenotypes. Up to 14.2% of the analysed SA stacks had sub-optimal coverage (i.e. missing basal and/or apical slices); however, most of these were limited to the first year of acquisition. Up to 16% of the stacks were affected by noticeable inter-slice motion (i.e. average inter-slice misalignment greater than 3.4 mm). Inter-slice motion was positively correlated with weight and body surface area. Only 2.1% of the stacks had an average end-diastolic cardiac image contrast below 30% of the dynamic range. These findings will be highly valuable both for the scientists involved in UKBB CMR acquisition and for those who use the dataset for research purposes.
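The inter-slice motion metric can be illustrated as the mean in-plane displacement between the heart centres of adjacent short-axis slices; this sketch assumes pre-computed centres in millimetres and is not the pipeline's exact definition:

```python
import numpy as np

def mean_interslice_misalignment(centres):
    """Mean in-plane displacement (mm) between the heart centres of
    adjacent short-axis slices. Under the paper's threshold, a stack
    averaging above 3.4 mm would be flagged for inter-slice motion.

    centres: (S, 2) in-plane heart-centre coordinates per slice, in mm
    """
    d = np.diff(centres, axis=0)                      # (S-1, 2) displacements
    return float(np.mean(np.linalg.norm(d, axis=1)))  # mean Euclidean shift
```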
Stratified decision forests for accurate anatomical landmark localization in cardiac images
Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning-based approaches, such as classifier and/or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose, and shape variations of organs. To learn more data-adaptive and patient-specific models, we propose a novel stratification-based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.
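Stratified training, in outline: partition the training samples by stratum and fit trees per stratum, so each sub-model adapts to one subpopulation (e.g. one organ size/pose cluster). The sketch below uses a hypothetical `train_tree` callback and a toy stratification; it is not the paper's decision-forest implementation:

```python
import numpy as np

def train_stratified_forest(X, y, strata, train_tree):
    """Fit one sub-model per stratum of the training data.
    X: (N, D) features; y: (N,) targets; strata: (N,) stratum labels.
    train_tree: callback that fits a model on one stratum's subset.
    Returns a dict mapping stratum label -> fitted sub-model."""
    forest = {}
    for s in np.unique(strata):
        mask = strata == s
        forest[s] = train_tree(X[mask], y[mask])
    return forest
```

At test time an image would first be assigned to a stratum and then routed to that stratum's trees, which is what makes the resulting model more data-adaptive than a single forest trained on the pooled data.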
Prior-based Coregistration and Cosegmentation
We propose a modular and scalable framework for dense coregistration and cosegmentation with two key characteristics: first, we substitute ground-truth data with the semantic map output of a classifier; second, we combine this output with population deformable registration to improve both alignment and segmentation. Our approach deforms all volumes towards consensus, taking into account image similarities and label consistency. Our pipeline can incorporate any classifier and similarity metric. Results on two datasets containing annotations of challenging brain structures demonstrate the potential of our method.
Comment: The first two authors contributed equally